Federated Quantum Machine Learning
Authors
Abstract
Distributed training across several quantum computers could significantly improve the training time, and if we share the learned model rather than the raw data, it could also improve data privacy, since training would happen where the data is located. One potential scheme to achieve these properties is federated learning (FL), which consists of several clients or local nodes training on their own data and a central node that aggregates the models collected from those nodes. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present federated training of hybrid quantum-classical machine learning models, although the framework could be generalized to purely quantum models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme achieved almost the same level of trained-model accuracy, yet with faster training. This demonstrates a promising direction for future research on the scaling aspects of QML.
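The aggregation scheme described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical illustration of federated averaging in NumPy: `local_train` stands in for the hybrid client update (pre-trained convolutional features fed into a QNN) using a toy least-squares loss, and all names, losses, and hyperparameters here are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def local_train(params, data, lr=0.1, steps=20):
    # Stand-in for a client's local update. In the paper's setting this would be
    # gradient descent on a hybrid model (pre-trained CNN features fed into a QNN);
    # here a toy linear least-squares loss keeps the sketch self-contained.
    X, y = data
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ params - y) / len(y)
        params = params - lr * grad
    return params

def federated_round(global_params, client_datasets):
    # One FL round: every client trains on its own data, only the model
    # parameters travel back to the central node, which averages them.
    client_params = [local_train(global_params.copy(), d) for d in client_datasets]
    return np.mean(client_params, axis=0)

# Toy usage: three clients, each holding a disjoint local dataset.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
params = np.zeros(4)
for _ in range(10):  # communication rounds
    params = federated_round(params, clients)
```

Only the model parameters cross the network in each round, which is the property the abstract highlights for privacy: raw client data never leaves its local node.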
Similar resources
Quantum adiabatic machine learning
We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. In the training phase we identify an optimal set of weak classifiers, to form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the traini...
Quantum-enhanced machine learning
The emerging field of quantum machine learning has the potential to substantially aid in the problems and scope of artificial intelligence. This is only enhanced by recent successes in the field of classical machine learning. In this work we propose an approach for the systematic treatment of machine learning, from the perspective of quantum information. Our approach is general and covers all t...
Quantum machine learning
Machine learning techniques are applied for solving a large variety of practical problems. The tasks attacked by machine learning algorithms include classification, regression, pattern recognition, etc. Traditionally, machine learning algorithms are divided into two groups depending on the nature of training data: supervised and unsupervised. Supervised machine learning algorithms take on the i...
Stochastic, Distributed and Federated Optimization for Machine Learning
We study optimization algorithms for the finite sum problems frequently arising in machine learning applications. First, we propose novel variants of stochastic gradient descent with a variance reduction property that enables linear convergence for strongly convex objectives. Second, we study the distributed setting, in which the data describing the optimization problem does not fit into a single c...
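As a hedged illustration of the variance-reduction idea mentioned in this entry, here is a minimal NumPy sketch in the spirit of SVRG; the function names, step sizes, and toy least-squares usage are assumptions made for illustration and are not taken from the cited work.

```python
import numpy as np

def svrg(grad_i, w0, n, step=0.05, epochs=20, inner_steps=200, seed=0):
    # Stochastic Variance Reduced Gradient for f(w) = (1/n) * sum_i f_i(w).
    # grad_i(w, i) must return the gradient of the i-th component f_i at w.
    rng = np.random.default_rng(seed)
    w_snapshot = np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):
        # Full gradient at the snapshot, recomputed once per outer epoch.
        full_grad = np.mean([grad_i(w_snapshot, i) for i in range(n)], axis=0)
        w = w_snapshot.copy()
        for _ in range(inner_steps):
            i = rng.integers(n)
            # The correction term keeps the estimator unbiased while its
            # variance shrinks as w and the snapshot approach the optimum.
            g = grad_i(w, i) - grad_i(w_snapshot, i) + full_grad
            w = w - step * g
        w_snapshot = w
    return w_snapshot

# Toy usage: least squares with f_i(w) = 0.5 * (a_i . w - b_i)**2.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
w_hat = svrg(lambda w, i: (A[i] @ w - b[i]) * A[i], np.zeros(5), n=100)
```

The snapshot's full gradient is reused across the inner loop, so each inner step costs only two component-gradient evaluations while the estimator's variance vanishes near the optimum.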
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimi...
Journal
Journal title: Entropy
Year: 2021
ISSN: 1099-4300
DOI: https://doi.org/10.3390/e23040460